Patent Abstract:
The present invention applies motion capture technology to a posture correction system in the field of electronic games or sports. It captures the user's motion in real time, compares a reference motion with the current motion, analyzes the result, and reports that result as a score to engage the user's interest. The invention relates to a three-dimensional motion analysis system that allows an ordinary user to work from any computer, regardless of place and time, by providing information such as the reference motion over an Internet communication network. The system comprises a reference motion providing device, which captures a reference motion from an actor, converts it together with music files into a 3D digital database, and provides it to users over the communication network; and a plurality of motion analysis devices, each of which connects to the reference motion providing device through the communication network, retrieves the reference motion for the user, captures the user's motion, compares the user's current motion with the original actor's reference motion, displays the result from the viewpoint the user chooses (front, back, left, or right view), and reports the per-entity difference values to the user as a score.
Publication number: KR20010095900A
Application number: KR1020000019366
Filing date: 2000-04-12
Publication date: 2001-11-07
Inventor: 박명수
Applicants: 박명수; 주식회사 닷에이스
IPC main class:
Patent Description:

3D motion capture analysis system and its method {3D Motion Capture analysis system and its analysis method}
[19] The present invention relates to a three-dimensional motion analysis system, and more particularly, to a three-dimensional motion analysis system that can be applied to a posture correction system for electronic games or sports, in which the system tracks the user's motion and compares and analyzes the reference motion against the current motion.
[20] Research on aircraft simulators during World War II, together with computer graphics technology that began in the 1960s, has increasingly developed into virtual reality technology thanks to the rapid advances in computer hardware and interface equipment.
[21] In general, virtual reality (VR) refers to the interaction of the human senses (sight, hearing, touch, taste, and smell) within a cyberspace built using computers. It is a new paradigm of information activity that allows indirect experience of situations that cannot be experienced directly in real life because of spatial and physical constraints.
[22] Recently, human-centered user interface technology employing the five human senses has been researched and developed as a core element technology for making the virtual reality world more realistic. Among these, visual and auditory interface technologies have advanced to the point where they can reproduce reality at a level of 80% or more.
[23] Attempts have been made to apply the virtual reality technology described above in various fields, for example multimedia content, electronic games, movies, and various education and training systems. As the film and game industries develop and three-dimensional modeling, rendering, and animation techniques are introduced, interest is focusing on technologies that can generate more natural motion.
[24] As a result, in recent years, motion capture technology has been introduced to record human motion as it is, to input the natural numerical motion data into a computer, and to drive a character using that motion data, in order to obtain natural motion data quickly and realistically.
[25] Most motion capture systems currently in use attach markers or sensors to the human body and then measure the position and orientation of each joint by analyzing marker images or sensor data. According to the operating principle of the marker or sensor used, they are classified into four types: ultrasonic, prosthetic (mechanical), magnetic, and optical.
[26] First, the ultrasonic motion capture system consists of a number of ultrasound-emitting sensors and three ultrasonic receivers. The ultrasonic sensors attached to each joint of the performer emit ultrasonic pulses in sequence, and the time it takes for each pulse to reach a receiver is used to calculate the distance from the sensor to that receiver. The three-dimensional position of each sensor can then be obtained by the triangulation principle from the distance values calculated at each of the three receivers.
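The triangulation step described above can be sketched as follows. This is an illustrative reconstruction, not the patent's implementation: the receiver layout, the `trilaterate` helper, and the test coordinates are all assumptions introduced for clarity.

```python
import math

# Receivers placed at P1=(0,0,0), P2=(d,0,0), P3=(i,j,0); given the
# distances r1, r2, r3 from a sensor to each receiver, solve the three
# sphere equations for the sensor position (classic trilateration).
def trilaterate(r1, r2, r3, d, i, j):
    x = (r1**2 - r2**2 + d**2) / (2 * d)
    y = (r1**2 - r3**2 + i**2 + j**2 - 2 * i * x) / (2 * j)
    z = math.sqrt(max(r1**2 - x**2 - y**2, 0.0))  # sensor assumed above the floor
    return (x, y, z)

# A real system would obtain each distance as r = v * t from the pulse's
# time of flight (v ≈ 343 m/s for sound in air). Here the distances are
# derived from a known sensor position so the result can be checked.
sensor = (1.0, 2.0, 1.5)
p1, p2, p3 = (0.0, 0.0, 0.0), (4.0, 0.0, 0.0), (2.0, 3.0, 0.0)
r1, r2, r3 = (math.dist(p, sensor) for p in (p1, p2, p3))
pos = trilaterate(r1, r2, r3, d=4.0, i=2.0, j=3.0)  # → (1.0, 2.0, 1.5)
```

With three receivers the sphere equations generally admit two mirror solutions; the sketch resolves the ambiguity by taking the solution above the receiver plane.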
[27] This method has the advantages of real-time processing and low cost, since location measurement requires little computation and is not greatly affected by the environment. However, the sampling frequency is low, and the number of ultrasonic generators that can be used simultaneously is limited. In addition, since the sensors are large, the actor's motion may become unnatural, and the acoustic nature of the devices makes the system susceptible to ultrasonic reflections.
[28] The prosthetic motion capture system, on the other hand, consists of an assembly of sliders and potentiometers that measure the performer's joint movements. Because it has no receiving device, it is unaffected by interference or the surrounding environment; no initial set-up process is required, and no expensive studio equipment is needed. In particular, this method can acquire motion data at a very high sampling frequency. However, it requires a cumbersome mechanism to be attached to the performer's body, which can lead to unnatural motion, and its accuracy varies with how precisely the prosthetic device is positioned at each of the performer's joints.
[29] The magnetic motion capture system attaches a magnetic field sensor to each joint of the performer and, as the actor moves near a magnetic field generating device, converts the change in the field measured by each sensor into a spatial displacement, thereby measuring the motion. This method can be implemented at low cost and allows real-time processing.
[30] However, when wired sensors are used, the cables running from the sensors restrict the actor's movement, making it difficult to express complex and fast motions naturally. Even with wireless sensors, a transmitter must be attached to the actor's body, and the operating radius is limited because capture is possible only within the magnetic field region.
[31] The optical motion capture system attaches reflective markers that reflect infrared light to the performer's main joints, and generates the coordinates of the reflected markers using 2 to 16 cameras fitted with infrared filters. The two-dimensional coordinates captured by the multiple cameras are analyzed by a dedicated program, converted into coordinates in three-dimensional space, and used for animation. One advantage of the optical system is its high sampling frequency, which is useful for capturing very fast movements such as those of a sports player. In addition, since the markers attached to the performer are small and light, the performer can move freely, and since 200 or more markers can be processed, the motions of multiple performers can be captured simultaneously.
[32] However, with this method real-time motion capture becomes difficult because a marker's trajectory is lost while the marker is occluded by another object, and a post-processing step on the motion data is required.
[33] The present invention was made to apply the motion capture technology described above to a posture correction system in the electronic game or sports field. Its object is to provide a three-dimensional motion analysis system that captures the user's motion in real time, compares and analyzes the reference motion against the current motion, and engages the user's interest by reporting the result as a score. Even when not all motions can be captured in real time, the approximate user motion can be visualized or analyzed online (while it is being performed) from the information of a few markers together with the original actor's motion, improving the feedback.
[34] Another object of the present invention is to provide a three-dimensional motion analysis system that a user can operate on his or her own computer regardless of place and time, by supplying information such as the reference motion over the Internet communication network.
[1] 1 is a block diagram showing the configuration of a three-dimensional motion analysis system according to the present invention.
[2] 2 is a flowchart illustrating a three-dimensional motion analysis process according to the present invention.
[3] 3 is a diagram for explaining a method of detecting a motion of an actor and providing the same to a server system.
[4] 4 is a view showing an example of the configuration of the practice stage proposed in the present invention.
[5] FIG. 5 is an explanatory diagram specifically illustrating a method of detecting a learner's motion and comparing it with a performer's motion.
[6] 6 is an explanatory diagram illustrating a motion display method of the present invention using three-dimensional coordinate data and image data of a character.
[7] 7 is a frame structure diagram showing the structure of a three-dimensional image frame used in the present invention.
[8] 8 is a flowchart showing an order of processing retargeting.
[9] FIG. 9 is a procedure diagram illustrating a step of evaluating motion using the image frame shown in FIG. 7.
[10] 10 is a view showing an example in which three-dimensional image coordinate data according to the present invention is stored.
[11] 11 is an exemplary view showing an example of comparative analysis of three-dimensional image data according to the present invention.
[12] 12 illustrates an example of one screen in an operation of mutually comparing and analyzing three-dimensional image data in real time according to the present invention;
[13] FIG. 13 is a view showing an example of a screen in which the real-time comparative analysis of 3D image data according to the present invention is displayed from the front, rear, left, and right.
[14] <Description of Symbols for Main Parts of Drawings>
[15] 10: reference motion providing apparatus 11: server computer
[16] 12: reference data input unit 13: reference data database
[17] 20, 30: motion analysis device 21, 31: user computer
[18] 22, 32: user data input unit 23, 33: temporary data input unit
[35] According to a feature of the present invention for achieving the above object, there is provided a method for analyzing three-dimensional motion data, in which the original actor's motion is displayed using the original actor's three-dimensional motion data and the three-dimensional motion of a learner following the same motion is compared against it. The method comprises: a retargeting step of converting the original actor's three-dimensional motion data to reflect the learner's body dimensions and storing the converted (retargeted) original actor three-dimensional motion data; a display step of displaying the original actor's three-dimensional motion data; a learner motion data storing step of storing the three-dimensional motion data of the learner, who performs the same motion while observing the displayed motion of the original actor; and a three-dimensional motion data comparison step of comparing the converted original actor's three-dimensional motion data with the stored learner motion data.
[36] The object of the present invention can also be achieved with a three-dimensional motion data analysis system that displays the original actor's motion using the original actor's three-dimensional motion data and comparatively analyzes the three-dimensional motion of a learner following the same motion. The original actor's motion data comprises a retargeting frame, which reflects the difference between the performer's body dimensions and the learner's body dimensions, and a plurality of motion frames for displaying the original actor's three-dimensional motion. The motion frames are divided into basic motion frames, which store the actor's characteristic motions, and general motion frames distinguished from them; the original actor's three-dimensional motion data includes identification information distinguishing the two. The system comprises: an original actor motion data storage unit storing this data; a retargeted data storage unit storing the retargeted original actor motion data; a display device for displaying the original actor's motion data; a reference data input device including a plurality of cameras and capture sensors for capturing the three-dimensional motion of the learner following the displayed motion of the original actor; a reference input data processor that converts the data input by the reference data input device into a three-dimensional digital database; and a central processing unit that retargets the original actor motion data, stores it in the retargeted data storage unit, comparatively analyzes the retargeted original actor motion data against the learner's three-dimensional digital data, displays the analysis results on the display device, and controls the overall operation of the system.
[37] The above objects and various advantages of the present invention will become more apparent to those skilled in the art from the preferred embodiments described below with reference to the accompanying drawings.
[38] Hereinafter, a preferred embodiment of a three-dimensional motion analysis system according to the present invention will be described in detail with reference to the accompanying drawings.
[39] 1 is a block diagram showing the configuration of a three-dimensional motion analysis system according to the present invention. Referring to FIG. 1, reference numeral 10 denotes a reference motion providing apparatus that captures a reference motion from an actor, stores it in a database, and provides it to users over a communication network; 20 and 30 denote motion analysis apparatuses that connect to the reference motion providing apparatus 10 through the communication network and compare the user's current motion with the reference motion.
[40] The reference motion providing apparatus 10 comprises a server computer 11 that controls the operation of the present invention; a reference data input unit 12 that captures motion data for each body part from the motion of an actor such as a broadcast entertainer or a famous sports player and provides it to the server computer 11; and a reference data database 13 that stores the reference data digitized at the server computer 11. The reference data database 13 stores not only motion capture data but also member data, various characters, statistical analysis applications, and information data.
[41] The motion analysis apparatuses 20 and 30 comprise user computers 21 and 31, which connect to the server computer 11 of the reference motion providing apparatus 10 through the communication network and perform the operation of comparing the user's current motion with the reference motion, and user data input units 22 and 32, which capture the user's current motion and provide it to the user computers 21 and 31. When the present invention is applied to a posture correction system in the sports field, it is preferable to further include temporary data input units 23 and 33 for providing temporary reference data: for example, raw data created by a coach can be compared with the data captured from the user, the motion states compared, and a score calculated.
[42] The operation of the present invention with the above configuration will now be described in detail with reference to FIG. 2.
[43] First, the operator of the server computer 11 uses the reference data input unit 12 to capture the actor's performance, performs image processing to convert it into motion capture data, and stores the result in the motion capture DB.
[44] Meanwhile, the music corresponding to the captured motion is recorded, and a digital music file is generated and stored in the music file DB. In addition, a character to stand in for the actor's performance is produced and stored in the character DB.
[45] After each of the above processes, the image is synthesized and corrected from the motion capture DB, the music file is called from the music file DB and matched to the image, and the character is called from the character DB and substituted in real time to create the 3D graphic data.
[46] When the user's motion is then captured by the user data input units 22 and 32, the reference motion and the current motion are compared and the result is analyzed and evaluated. Once the evaluation is made, the score is provided to the user, so that one or more users can play the game.
[47] 3 is an explanatory diagram specifically illustrating a method of detecting an actor's motion and providing it to the server system. N sensors 110 are attached to the body of the performer 140, who begins the performance in a designated place, and the motion is detected through two or more cameras 100. The sensors 110 are preferably attached to every possible joint, and the more cameras there are, the more accurate the data that can be obtained.
[48] The information obtained by the cameras is input to a real-time motion capture detector 130 (image processing board). The real-time motion capture detector 130 analyzes the image data obtained from each camera 100 and outputs real-time three-dimensional coordinate values for each entity. The output three-dimensional coordinate values are stored in the database system 13, which holds the real-time three-dimensional image coordinates of each of the actor's entities, within the server system 11. The time interval at which the data is stored depends on the performance of the cameras and the image processing speed of the system; in the present invention it is assumed to be t1. The original actor's motion is post-processed so that as many sensors as possible can be tracked in real time, allowing prediction of where each of the user's sensors should move.
[49] Meanwhile, the raw image data 150 provided by a designer is processed to generate a virtual stage and a character, after which the character and virtual stage database 160 is built and stored in the server system 11. The actor's three-dimensional image coordinate database 13 and the character and virtual stage database 160 constructed in this way are held in the server system and transmitted to the user system 21 upon its request.
[50] 4 shows an example of the configuration of the practice stage proposed by the present invention, which includes a plurality of cameras 100 together with a monitor and sound equipment on which the actor's motion and a background screen can be viewed. In FIG. 4 a fixed stage area is provided; when the learner performs motion within this area, the cameras can accurately detect the learner's motion coordinates.
[51] When the learner first enters the stage area, the learner's initial position must be matched to the performer's initial position. An initial footstep position marker may be provided on the floor of the stage for this purpose, but preferably the positions of the sensors attached to the learner are identified by the cameras, and the performer's initial start position and the difference values are then shown on the display device to guide the learner into alignment. When the learner's initial position falls within an approximate range of the performer's initial position, a "Start" signal is shown on the display device and the learner's motion can begin. The learner then moves along with the displayed actor's motion.
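The initial-position matching above can be sketched as a per-sensor distance check. The function name, tolerance value, and sensor coordinates below are illustrative assumptions, not values from the patent.

```python
import math

TOLERANCE = 0.2  # max allowed per-sensor distance (stage units), assumed

def ready_to_start(actor_start, learner_pose, tol=TOLERANCE):
    """Compare the learner's current sensor positions against the
    performer's initial positions; return (ok, per-sensor differences)."""
    diffs = [math.dist(a, b) for a, b in zip(actor_start, learner_pose)]
    return all(d <= tol for d in diffs), diffs

# Two sensors for illustration; a real setup would use all N sensors.
actor_start  = [(0.0, 0.0, 1.0), (0.5, 0.0, 1.0)]
learner_pose = [(0.05, 0.0, 1.0), (0.5, 0.1, 1.0)]
ok, diffs = ready_to_start(actor_start, learner_pose)
# ok is True here, so the display would show the "Start" signal;
# otherwise the diffs would be displayed to guide the learner.
```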
[52] The monitor may be configured to display only the actor's or only the learner's motion in full screen, or to display both simultaneously. A digital camera usable in the present invention should have 3 million pixels, capture 30 or more frames per second, and be able to take close-up images within 2 to 3 meters of the subject.
[53] FIG. 5 is an explanatory diagram specifically illustrating a method of detecting the learner's motion and comparing it with the actor's motion. The user system 21 holds the character and virtual stage database 160 and the performer's three-dimensional coordinate value database 13 transmitted from the server system 11. Although not shown in the drawing, a storage unit is preferably also provided that stores the performer's three-dimensional coordinate value database after retargeting. The actor's motion is displayed on the display device using the three-dimensional coordinate value database 13 stored in the user system 21 and the stored character database. The displayed actor may use image information composed of the entities of each of the performer's body parts, or image information for each entity of a character generated by a designer, such as the skeleton character shown in FIG. 3.
[54] The learner 200 attaches the N sensors 110 to his or her body and begins to act along with the displayed actor's motion, which is then detected through two or more cameras 100.
[55] The information obtained by the cameras is input to the real-time motion capture detector 130, which analyzes the image data obtained from each camera 100 and outputs real-time three-dimensional coordinate values for each entity. The output coordinate values are stored in the user system 21 in the form of a database 210. The user system 21 compares and analyzes the performer's per-entity three-dimensional coordinate values against the learner's, evaluates the result, and displays it on the display device.
[56] 6 is an explanatory diagram illustrating the motion display method of the present invention using three-dimensional coordinate data and the character's image data. Ordinary video image data stores the actor's real-time video for the entire dance motion over time, so tens of megabytes of data must be stored to hold the performer's dance motion. To compensate for this disadvantage, the present invention stores the change values of the real-time three-dimensional coordinates of the actor's dance motion separately from the image data for each of the actor's entities. To play back a performer's 3D motion, the image data corresponding to each entity is processed according to the changes in the stored per-entity real-time 3D coordinate data, so that the actual motion can be viewed.
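A back-of-the-envelope calculation shows why storing per-entity coordinates is so much cheaper than storing video frames. All sizes below are illustrative assumptions (entity count, sample rate, resolution), not figures from the patent, and the video estimate is for raw uncompressed frames.

```python
# Coordinate storage: N entities, each (x, y, z), sampled every 0.01 s.
SECONDS  = 180          # a three-minute dance, assumed
ENTITIES = 20           # joints tracked, assumed
RATE     = 100          # samples per second (0.01 s interval)
COORD_SZ = 3 * 4        # x, y, z as 4-byte values

coord_bytes = SECONDS * ENTITIES * RATE * COORD_SZ
# Raw video storage: 30 fps, 640x480, 3 bytes per pixel (RGB).
video_bytes = SECONDS * 30 * 640 * 480 * 3
# coord_bytes ≈ 4.3 MB versus video_bytes ≈ 5 GB uncompressed —
# the coordinate stream is orders of magnitude smaller.
```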
[57] 7 is a frame structure diagram showing the structure of the original actor's three-dimensional image frames used in the present invention: (a) shows the original actor's motion frames and (b) the learner's. Each box represents a motion frame stored at unit time intervals, and the symbol above each box is the frame number. The original actor's video frames consist of a retargeting frame, start and end frames, and motion frames. Motion frames are divided into basic motion frames and general motion frames; the most characteristic motion frames among them are designated basic motion frames. The basic motion frame information is marked (checked) either at the time the motion data is generated or, after it is stored in a DB, by an expert (content creator) using a management program that designates a "characteristic pose". The information on the resulting basic motion frames is attached to the motion frame data file and stored, or kept separately in the DB. Basic motion frames Nos. 8, 12, and 15 of the original actor in (a) correspond to frames Nos. 10, 15, and 20 of the learner's three-dimensional motion frames in (b), respectively. As FIG. 7 shows, a delay arises at each basic motion frame because the learner lags while following the original actor's three-dimensional motion. To reduce this delay, a preview window showing the original actor's upcoming three-dimensional motion may be provided.
[58] Since the original actor's body dimensions differ from the learner's, this difference must be resolved before the motions are analyzed. The process of resolving it is called retargeting, and FIG. 8 shows the order in which retargeting is processed. The retargeting frame displays a stationary pose of the original actor and induces the learner to take the same pose, so that the size difference of each element between the original actor and the learner can be extracted and reflected in the original actor's 3D image coordinate data. Another function of the retargeting frame is to match the initial positions of the original actor and the learner.
[59] First, the original actor's body data and the learner's body data are obtained using the retargeting frame or from other external data. After the two are analyzed, the original actor's motion data is transformed to suit the learner's body dimensions. In general, as shown in FIG. 8, the transformation scales the original actor's three-dimensional motion data up or down about the body part that serves as the reference for the motion data (usually the waist), and then moves the center. After the center movement, the final retargeted motion data of the original actor is obtained by a further transformation that satisfies the constraints. As an example of why such a constraint-satisfying deformation is needed: if the 3D motion data of a 2 m-tall basketball player is shrunk about the waist to match the body of a 1 m-tall elementary school student, the feet, which should touch the ground, will float. The constraint "the feet must touch the ground" must therefore be imposed, and the basketball player's 3D motion data converted with this constraint in mind. In general, converting 3D motion data also requires solving a self-intersection problem in addition to retargeting; however, when motion data is transferred between people of similar body dimensions, as in the present invention, this need not be considered.
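The retargeting sequence above (scale about the waist, then enforce the ground-contact constraint) can be sketched minimally as follows. The joint names, the single-frame data layout, and the 0.5 scale factor (a 2 m player mapped toward a 1 m learner) are illustrative assumptions, not the patent's data.

```python
# Hedged sketch of retargeting one motion frame: scale every entity's
# offset from the waist, then translate so the foot touches the ground.
def retarget(frame, scale, waist_key="waist", foot_key="foot"):
    """frame: dict entity -> (x, y, z); z is the height axis here."""
    wx, wy, wz = frame[waist_key]
    # 1) Scale about the waist by the body-size ratio.
    out = {k: (wx + scale * (x - wx),
               wy + scale * (y - wy),
               wz + scale * (z - wz)) for k, (x, y, z) in frame.items()}
    # 2) Constraint "foot must touch the ground": shift vertically so
    #    the foot sits at z = 0 instead of floating.
    dz = out[foot_key][2]
    return {k: (x, y, z - dz) for k, (x, y, z) in out.items()}

player = {"waist": (0.0, 0.0, 1.0), "head": (0.0, 0.0, 2.0),
          "foot": (0.0, 0.0, 0.0)}
small = retarget(player, scale=0.5)
# Without step 2 the foot would float at z = 0.5; after it, foot z = 0.0.
```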
[60] The three-dimensional motion comparison method of the present invention can be divided broadly into four steps. Step 1 is the retargeting step: the original actor's and the learner's body dimensions and initial positions are compared using the retargeting frame, and the original actor's 3D image coordinate data is converted using the comparison values. Step 2 displays the original actor's motion data as it was before the retargeting conversion. Step 3 stores the motion of the learner performing along with the displayed motion data of the original actor. Step 4 compares the retargeted original actor's motion data with the stored learner motion data. Each step is described in more detail below.
[61] FIG. 9 is a flowchart illustrating the step of evaluating motion using the image frames shown in FIG. 7. First, the retargeting frame is used to compare the original actor's and the learner's body dimensions and initial positions; the body dimension data of the original actor and the learner may also come from external data. The comparison values are then used to transform (retarget) the original actor's 3D image coordinate data (step s2).
[62] Next, the original actor's three-dimensional motion data is displayed (step s3). The displayed motion may use the retargeted three-dimensional motion data of the original actor, but preferably the original actor's pre-retargeting three-dimensional motion data is displayed, which has the advantage of showing the original actor's motion faithfully. It is also desirable to provide a function that previews the original actor's 3D motion data so the learner can prepare for the next motion in advance. The learner performs along with the displayed three-dimensional motion of the original actor, and these motions are stored in real time (step s4).
[63] It is then determined whether the displayed original actor 3D motion frame is a basic motion frame (step s5); if it is not, steps s3 and s4 are repeated. If it is a basic motion frame, the absolute coordinate difference values between the retargeted original actor's basic motion frame and the stored learner motion frames are calculated, and the learner's basic motion frame corresponding to the original actor's basic motion frame is extracted from these difference values (step s6). Because there is a delay between the original actor's motion and the learner's motion, as described with reference to FIG. 7, the extraction proceeds as follows: the coordinate values of the learner's motion frames occurring within an arbitrary number of frames after the original actor's basic motion frame are compared against the original actor's basic motion frame, and a frame whose total absolute coordinate difference falls within a predetermined range is extracted as the learner's basic frame. The arbitrary number of frames may be selected according to the user's ability: for an advanced user, the check can be limited to the three frames generated after the original actor's basic motion, while for a beginner more than three frames can be examined. In the case of FIG. 7, the original performer's 8th motion frame is delayed to the learner's 10th motion frame, the 12th to the 15th, and the 15th to the 20th. The method of calculating the absolute coordinate change between the original actor's motion and the learner's is described with reference to FIG. 11.
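The search in step s6 can be sketched as scanning a delay window of learner frames for the first one whose summed absolute coordinate difference stays within the threshold. The function name and data layout are illustrative; the example coordinates and the threshold 15 are taken from the Table 1/Table 2 worked example later in the text.

```python
# Hedged sketch: find the learner's basic motion frame within a delay
# window after the actor's basic frame (per-entity absolute differences).
def find_learner_basic_frame(actor_frame, learner_frames, start, window, limit):
    """actor_frame: [(x, y, z), ...] per entity; learner_frames: list of
    such frames. Returns (frame index, total difference) or None."""
    for idx in range(start, min(start + window, len(learner_frames))):
        total = sum(abs(a - b)
                    for ent_a, ent_b in zip(actor_frame, learner_frames[idx])
                    for a, b in zip(ent_a, ent_b))
        if total <= limit:
            return idx, total
    return None

actor_basic = [(7, 54, -15)]                       # entity 1, actor's frame t2
learner = [[(9, 45, -20)], [(10, 58, -18)], [(14, 69, -8)]]
match = find_learner_basic_frame(actor_basic, learner, start=0, window=3, limit=15)
# match → (1, 10): learner frame t2, total difference 3 + 4 + 3 = 10
```

For an advanced user the window would be 3 frames; for a beginner it would be larger, as the text describes.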
[64] The cumulative value of the absolute coordinate differences between the basic motion frames determined in this way is stored (step s8). Instead of accumulating the absolute coordinate difference values when a basic motion frame is selected, a separate mark may be placed on the corresponding entity of each motion frame to show the user that the learner followed the motion correctly; for example, special coordinates of a matched entity can be circled or otherwise visually marked to indicate that it matches the original actor's motion. It is then checked whether the frame is the last frame (step s9); if not, processing returns to step s3, and if it is the last frame, the total score is displayed using the accumulated absolute coordinate change. A person skilled in the art could easily present this result in various ways by various techniques instead of simply displaying the total score.
[65] 10 is a diagram illustrating an example in which three-dimensional image coordinate data according to the present invention is stored. Each part of the actor is defined as an entity in joint units; the X, Y, and Z coordinates are read as the motion progresses, and the differences in their changes are used for comparison. The actor's three-dimensional coordinates for the N entities are read and stored at 0.01-second intervals. For example, entity 1 tracks the position of the actor's right wrist: at the start (t = 0), entity 1's coordinate position is (x, y, z) = (9, 45, -20); at t = 0.01 it changes to (7, 54, -15); and at t = 0.02 it changes to (12, 55, -5). The coordinate value of each entity at each time step is stored.
[66] FIG. 11 is an exemplary view showing a comparative analysis of three-dimensional image data according to the present invention. Target A represents the actor and target B the learner. As time changes from 0.00 -> 0.01 -> 0.02, the coordinates of entity 1 of target A change as (9, 45, -20) -> (7, 54, -15) -> (12, 55, -5), while the coordinates of entity 1 of target B change as (9, 45, -20) -> (10, 58, -18) -> (14, 69, -8).
[67] The lower part of FIG. 11 displays the coordinate difference values of the real-time three-dimensional image data of targets A and B. Table 1 shows the three-dimensional coordinate values of the first entity of target A (the actor) and target B (the learner) at t1 (t = 0.00), t2 (t = 0.01), and t3 (t = 0.02).
[68]        Target A                  Target B
            x    y    z               x    y    z
      t1    9   45   -20        t1    9   45   -20
      t2    7   54   -15        t2   10   58   -18
      t3   12   55    -5        t3   14   69    -8
[69] Assuming that t2 of target A is the basic motion frame and that the threshold on the total of the absolute coordinate difference values determining the basic motion frame is 15, Table 2 illustrates how the corresponding basic motion frame of target B is extracted. Taking the first entity's three-dimensional coordinate value at t2 (t = 0.01) of target A (the actor) as the reference, the absolute coordinate difference values for target B and their totals are shown.
[70]  Absolute coordinate difference value (reference: target A basic motion frame t2)
              x            y             z            ∑
      t2    3 (10-7)     4 (58-54)     3 (-18+15)    10
      t3    7 (14-7)    15 (69-54)     7 (-8+15)     29
[71] As shown in Table 2, after examining the total of the coordinate difference values for target B, a frame whose total is below the predetermined threshold (15 in this description) can be extracted as the basic motion frame of target B corresponding to the basic motion frame of target A (the t2 frame of target B in Table 2). Table 2 uses only the first entity as a reference; in an actual application, it is preferable to consider the absolute coordinate difference values of all entities.
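The extraction of Table 2 can be reproduced in a few lines; the helper name `abs_diff` is illustrative.

```python
def abs_diff(p, q):
    """Per-axis absolute coordinate difference between two positions."""
    return tuple(abs(a - b) for a, b in zip(p, q))

actor_t2 = (7, 54, -15)                        # target A's basic motion frame
candidates = {"t2": (10, 58, -18), "t3": (14, 69, -8)}  # target B's frames
THRESHOLD = 15

for name, coord in candidates.items():
    d = abs_diff(actor_t2, coord)
    verdict = "extracted" if sum(d) <= THRESHOLD else "rejected"
    print(name, d, sum(d), verdict)
# t2 (3, 4, 3) 10 extracted
# t3 (7, 15, 7) 29 rejected
```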
[72] FIG. 12 illustrates an example of a screen used when mutually comparing and analyzing 3D image data in real time according to the present invention. During evaluation, a screen as shown in FIG. 12 is displayed on the user computers 21 and 31 so that the user can easily check the result. The reference motion is shown in frame A and the current motion in frame B so that the user can compare them, and the points where differences occur are indicated with arrows or similar marks so that the user can spot them easily. A control bar is also provided so that the user can easily compare the motion capture data at any desired temporal position.
[73] FIG. 13 illustrates another example of a screen used when mutually comparing and analyzing 3D image data in real time according to the present invention. By simultaneously displaying the front, back, left, and right views of both the actor and the learner, the learner can compare his or her appearance with the actor's more accurately. In this case as well, a control bar is provided so that the user can easily compare the motion capture data at any desired temporal position.
[74] As described above, the three-dimensional motion analysis system of the present invention captures the user's motion in real time, compares the reference motion with the current motion, analyzes the result, and informs the user of it, thereby inducing the user's interest.
[75] In addition, by providing information such as the reference motion over the Internet, the system has the effect that a general user can use it on any computer regardless of place and time.
[76] While the invention has been shown and described with respect to particular embodiments, it will be apparent to those skilled in the art that various modifications and changes can be made without departing from the spirit and scope of the invention as defined by the appended claims.
[77] For example, the present invention can be applied to aiming and posture correction in the defense field, rehabilitation and posture correction in the medical field, broadcasting, film and animation production, games, posture correction in various sports, education, and the like.
Claims:
Claims (6)
[1" claim-type="Currently amended] A three-dimensional motion data analysis method for displaying the motion of an original actor using the original actor's three-dimensional motion data and for comparatively analyzing, against it, the three-dimensional motion of a learner learning the same motion, the method comprising:
Retargeting the original actor three-dimensional motion data to reflect the learner's body dimensions and then storing the converted (retargeted) original actor three-dimensional motion data;
Displaying the original actor's three-dimensional motion data;
A learner motion data storage step of storing three-dimensional motion data of a learner learning the same motion while observing the displayed motion of the original actor; And
And comparing the converted original actor three-dimensional motion data with the stored learner motion data.
[2" claim-type="Currently amended] The method of claim 1,
The original actor three-dimensional motion data includes at least one basic motion frame designating a special motion of the original actor, and the step of comparing the three-dimensional motion data comprises:
Calculating an absolute coordinate difference value between the displayed basic motion frame of the original actor and the motion frames of the learner; And
And extracting a frame whose calculated absolute difference value is equal to or less than a predetermined threshold value as a learner's basic motion frame.
[3" claim-type="Currently amended] The method of claim 2, wherein in the basic motion frame extraction step,
the learner's basic motion frame is selected from among the learner motion frames generated within an arbitrary number of frames after the basic motion frame of the original actor.
[4" claim-type="Currently amended] The method of claim 3, wherein
the arbitrary number of frames is selectable according to the learner's skill level.
[5" claim-type="Currently amended] The method of claim 1, wherein the retargeting step comprises:
Receiving the body dimensions of the original actor and the body dimensions of the learner; And
Converting the original actor's three-dimensional motion data by obtaining the difference between the body dimensions and then enlarging or reducing the original actor's three-dimensional motion data based on a part of the body.
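A minimal sketch of such a retargeting conversion follows, assuming a single representative body dimension (e.g. height) and uniform scaling about the origin; the claim itself only specifies enlarging or reducing based on the dimension difference, so this particular scaling rule is an assumption.

```python
def retarget(actor_frames, actor_dimension, learner_dimension):
    """Enlarge or reduce the original actor's 3D motion data by the
    ratio of a representative body dimension so the motion matches the
    learner's body.  `actor_frames[t][entity]` is an (x, y, z) tuple.
    Uniform scaling about the origin is an illustrative assumption.
    """
    scale = learner_dimension / actor_dimension
    return [[tuple(c * scale for c in entity) for entity in frame]
            for frame in actor_frames]
```

For instance, retargeting from an actor dimension of 180 to a learner dimension of 90 halves every coordinate.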
[6" claim-type="Currently amended] A three-dimensional motion data analysis system for displaying the motion of an original actor using the original actor's three-dimensional motion data and for comparatively analyzing, against it, the three-dimensional motion of a learner learning the same motion, the system comprising:
An original actor motion data storage unit storing the original actor's three-dimensional motion data, the data including a retargeting frame for reflecting the difference between the body dimensions of the original actor and those of the learner and a plurality of motion frames for displaying the original actor's three-dimensional motion, the plurality of motion frames comprising basic motion frames that store actor-specific motions and general motion frames distinguished from them, together with identification information for distinguishing the basic motion frames from the general motion frames;
A retargeted original actor motion data storage unit for storing the retargeted original actor motion data;
A display device for displaying motion data of the original actor;
A reference data input device having a plurality of cameras and capture sensors for capturing the three-dimensional motion of a learner learning the same motion while following the displayed motion of the original actor;
A reference input data processor for organizing the data input through the reference data input device into a database of three-dimensional digital data; And
A central processing unit that retargets the original actor motion data and stores it in the retargeted original actor motion data storage unit, compares and analyzes the retargeted original actor's motion data with the learner's three-dimensional digital data, and controls the overall operation of the system, such as displaying the analysis result on the display device.